Results 1 - 20 of 251

1.
Proceedings of SPIE - The International Society for Optical Engineering ; 12602, 2023.
Article in English | Scopus | ID: covidwho-20245409

ABSTRACT

With the outbreak of COVID-19, the prevention and treatment of the disease have become a focus of public health, and most patients are concerned about their symptoms. COVID-19 presents symptoms similar to the common cold and cannot be diagnosed from symptoms alone, so medical images of the lungs must be examined to determine whether a patient is COVID-19 positive. As the number of patients with pneumonia-like symptoms increases, more and more lung medical images need to be acquired and read. At the same time, the number of physicians falls far short of patient demand, so patients may not have their conditions detected and explained in time. To address this, we performed image augmentation and data cleaning on a dataset of COVID-19 lung medical images and designed a deep learning classification network to make accurate classification judgments. Using a new fine-tuning method and hyperparameter tuning that we designed, the network achieves 95.76% classification accuracy on this task, with higher accuracy and less training time than classic convolutional neural network models. © 2023 SPIE.
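
The abstract does not give the network architecture or the fine-tuning procedure, so the following is only a minimal transfer-learning sketch in PyTorch, assuming an ImageNet-pretrained ResNet-18 backbone and a two-class (COVID-19 positive/negative) label set.

```python
# Minimal fine-tuning sketch (assumptions: ResNet-18 backbone, binary labels);
# this is not the authors' network, only an illustration of the general recipe.
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes: int = 2) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in model.parameters():        # freeze the pretrained backbone
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

model = build_finetune_model()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the head
criterion = nn.CrossEntropyLoss()
```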

2.
ACM International Conference Proceeding Series ; : 419-426, 2022.
Article in English | Scopus | ID: covidwho-20244497

ABSTRACT

The size and location of lesions in CT images of novel coronavirus pneumonia (COVID-19) vary over time, and the lesion areas have low contrast and blurred boundaries, which makes segmentation difficult. To solve this problem, a COVID-19 image segmentation algorithm based on a conditional generative adversarial network (CGAN) is proposed. It uses an improved DeeplabV3+ network as the generator, which enhances the extraction of multi-scale contextual features, reduces the number of network parameters, and improves training speed. A Markov discriminator with six fully convolutional layers is used instead of a common discriminator, with the aim of focusing more on the local features of the CT image. Through continuous adversarial training between the generator and the discriminator, the network weights are optimized so that the segmented image produced by the generator approaches the ground truth as closely as possible. On a public COVID-19 CT dataset, the area under the ROC curve, F1-score, and Dice similarity coefficient reached 96.64%, 84.15%, and 86.14%, respectively. The experimental results show that the proposed algorithm is accurate and robust and has the potential to become a safe, inexpensive, and time-saving assistant tool in clinical diagnosis, providing a reference for computer-aided diagnosis. © 2022 ACM.
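
The abstract names a Markov discriminator with six fully convolutional layers, i.e. a PatchGAN-style critic. The sketch below is an assumed PyTorch rendering of that idea; the channel widths and the two-channel input (CT slice plus mask) are guesses, not the authors' configuration.

```python
# PatchGAN-style Markov discriminator sketch: six conv layers scoring local patches
# of (CT slice, segmentation mask) pairs. Channel sizes are assumptions.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 2):   # CT slice + segmentation mask
        super().__init__()
        chans = [in_channels, 64, 128, 256, 512, 512]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        layers.append(nn.Conv2d(chans[-1], 1, 4, padding=1))  # per-patch real/fake score map
        self.net = nn.Sequential(*layers)

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))
```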

3.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20243842

ABSTRACT

This paper introduces an improved method for COVID-19 classification from computed tomography (CT) volumes using a combination of a complex-architecture convolutional neural network (CNN) and orthogonal ensemble networks (OEN). The novel coronavirus disease reported in 2019 (COVID-19) is still spreading worldwide. Early and accurate diagnosis of COVID-19 is required in this situation, and the CT scan is an essential examination. Various computer-aided diagnosis (CAD) methods have been developed to assist and accelerate doctors' diagnoses. Although ensemble learning is one effective approach, existing methods combine major general-purpose models that do not specialize in COVID-19. In this study, we attempted to improve the performance of a CNN for COVID-19 classification based on chest CT volumes. The CNN model specializes in feature extraction from anisotropic chest CT volumes. We adopt the OEN, an ensemble learning method that considers inter-model diversity, to boost its feature extraction ability. For the experiment, we used chest CT volumes of 1283 cases acquired at multiple medical institutions in Japan. The classification results on 257 test cases indicated that the combination improves classification performance. © 2023 SPIE.
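
The OEN training that promotes inter-model diversity is not described in enough detail here to reproduce; the sketch below only shows the generic ensemble step of averaging softmax outputs from several independently trained CNNs, which is the part such ensemble methods share.

```python
# Generic ensemble averaging over several trained models (not the OEN training itself).
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(models, volume):
    """Average class probabilities from each member model for one CT volume batch."""
    probs = [F.softmax(m(volume), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0)      # (batch, num_classes)
```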

4.
ACM International Conference Proceeding Series ; 2022.
Article in English | Scopus | ID: covidwho-20243833

ABSTRACT

The COVID-19 pandemic still affects most parts of the world today. Despite extensive research on diagnosis, prognosis, and treatment, a major challenge remains the limited number of expert radiologists who can provide diagnosis and prognosis from X-ray images. Thus, to make COVID-19 diagnosis more accessible and faster, several researchers have proposed deep-learning-based artificial intelligence (AI) models. While most of these proposed machine and deep learning models work in theory, they may not find acceptance among the medical community for clinical use because of weak statistical validation. For this article, radiologists' views were considered to understand the correlation between the theoretical findings and real-life observations. The article explores convolutional neural network (CNN) classification models to build a four-class classifier covering "COVID-19", "Lung Opacity", "Pneumonia", and "Normal", which also provides the uncertainty measure associated with each class. The authors also employ various pre-processing techniques to enhance the X-ray images for specific features. To address over-fitting during training and the class imbalance in the dataset, Monte Carlo dropout and focal loss are used, respectively. Finally, a comparative analysis of ResNet-18, VGG-19, ResNet-152, MobileNet-V2, Inception-V3, and EfficientNet-V2 is provided, matching state-of-the-art results on the open benchmark chest X-ray datasets with a sensitivity of 0.9954, specificity of 0.9886, precision of 0.9880, F1-score of 0.9851, accuracy of 0.9816, and area under the receiver operating characteristic curve (ROC-AUC) of 0.9781. © 2022 ACM.
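
Two of the named ingredients, focal loss for class imbalance and Monte Carlo dropout for per-class uncertainty, can be sketched as follows; the gamma/alpha values and the number of stochastic passes T are assumed defaults, not the authors' settings.

```python
# Sketch of focal loss and Monte Carlo dropout inference (assumed hyperparameters).
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma: float = 2.0, alpha: float = 0.25):
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                         # probability of the true class
    return (alpha * (1 - p_t) ** gamma * ce).mean()

@torch.no_grad()
def mc_dropout_predict(model, x, T: int = 20):
    model.eval()
    for m in model.modules():                    # re-enable only the dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])
    return probs.mean(0), probs.std(0)           # mean prediction and per-class uncertainty
```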

5.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12469, 2023.
Article in English | Scopus | ID: covidwho-20242921

ABSTRACT

The Medical Imaging and Data Resource Center (MIDRC) has been built to support AI-based research in response to the COVID-19 pandemic. One of the main goals of MIDRC is to make the data collected in the repository ready for AI analysis. Because of data heterogeneity, there is a need to standardize the data and make data mining easier. Our study aims to stratify imaging data according to underlying anatomy using open-source image processing tools. The experiments were performed using Google Colaboratory on computed tomography (CT) imaging data available from the MIDRC. We adopted existing open-source tools to process CT series (N=389), define image sub-volumes according to body part classification, and additionally identify series slices containing specific anatomic landmarks. Cases with automatically identified chest regions (N=369) were then processed to automatically segment the lungs. To assess the accuracy of segmentation, we performed outlier analysis using 3D shape radiomics features extracted from the left and right lungs. Standardized DICOM objects were created to store the resulting segmentations, regions, landmarks, and radiomics features. We demonstrated that the MIDRC chest CT collections can be enriched using open-source analysis tools and that data available in MIDRC can be further used to evaluate the robustness of publicly available tools. © 2023 SPIE.
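
The outlier analysis on 3D shape radiomics features is a generic step; a minimal sketch, assuming the shape features have already been extracted into a NumPy matrix (rows = lungs, columns = features), could flag suspect segmentations with an isolation forest as below.

```python
# Sketch: flag potentially failed lung segmentations as outliers in shape-feature space.
# `shape_features` is an assumed (n_lungs, n_features) array of precomputed 3D shape radiomics.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

def find_outlier_segmentations(shape_features: np.ndarray, contamination: float = 0.05):
    scaled = StandardScaler().fit_transform(shape_features)
    labels = IsolationForest(contamination=contamination, random_state=0).fit_predict(scaled)
    return np.where(labels == -1)[0]             # indices of suspected outlier lungs
```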

6.
ACM International Conference Proceeding Series ; : 12-21, 2022.
Article in English | Scopus | ID: covidwho-20242817

ABSTRACT

The COVID-19 pandemic has caused a global health crisis. Automated diagnostic methods can help control the spread of the pandemic and assist physicians working under high workload by enabling the quick treatment of affected patients. Owing to the scarcity of medical images drawn from different resources, the resulting image heterogeneity raises challenges for effective network training and for learning robust features. We propose a multi-joint unit network for the diagnosis of COVID-19 based on a joint unit module, which leverages receptive fields at multiple resolutions to learn rich representations. Existing approaches usually employ a large number of layers to learn the features, which consequently requires more computational power and increases network complexity. To compensate, our joint unit module extracts low-, same-, and high-resolution feature maps simultaneously in different phases. These learned feature maps are then fused and passed to the classification layers. We observed that our model learns sufficient information for classification without a performance loss and with faster convergence. We used three public benchmark datasets to demonstrate the performance of our network, which consistently outperforms existing state-of-the-art approaches in accuracy, sensitivity, specificity, and F1-score across all datasets. © 2022 ACM.
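
The joint unit module is described only at a high level; the block below is a rough PyTorch sketch of the stated idea (processing the input at lower, same, and higher resolution in parallel and fusing the three maps), with channel counts and the fusion rule chosen arbitrarily.

```python
# Rough sketch of a multi-resolution "joint unit" (assumed channels and fusion rule).
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointUnit(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.low = nn.Conv2d(channels, channels, 3, padding=1)   # on a downsampled copy
        self.same = nn.Conv2d(channels, channels, 3, padding=1)
        self.high = nn.Conv2d(channels, channels, 3, padding=1)  # on an upsampled copy
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        size = x.shape[-2:]
        low = F.interpolate(self.low(F.avg_pool2d(x, 2)), size=size, mode="bilinear")
        same = self.same(x)
        up = F.interpolate(x, scale_factor=2, mode="bilinear")
        high = F.interpolate(self.high(up), size=size, mode="bilinear")
        return self.fuse(torch.cat([low, same, high], dim=1))    # fused multi-scale features
```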

7.
CMC-Computers Materials & Continua ; 74(2), 2023.
Article in English | Web of Science | ID: covidwho-20241775

ABSTRACT

Humankind is facing one of the deadliest pandemics in history, caused by COVID-19. Apart from this challenging pandemic, the World Health Organization (WHO) considers tuberculosis (TB) a preeminent infectious disease due to its high infection rate. Both TB and COVID-19 severely affect the lungs, making the job of medical practitioners harder, since the two diseases can easily be misidentified in the current situation. There is therefore an immediate need for a meticulous automatic diagnostic tool that can accurately discriminate between these diseases. As one of the first smart health systems to examine three clinical states (COVID-19, TB, and normal cases), this study proposes a combination of image filtering, data augmentation, transfer learning, and advanced deep-learning classifiers to effectively separate these diseases. It first employs a generative adversarial network (GAN) and a Crimmins speckle removal filter on X-ray images to overcome the issues of limited data and noise. Each pre-processed image is then converted into the red, green, and blue (RGB) and Commission Internationale de l'Eclairage (CIE) color spaces, from which deep fused features are formed by extracting relevant features with DenseNet121 and ResNet50. Each feature extractor provides the 1000 most useful features, which are then fused and fed to two variants of recurrent neural network (RNN) classifiers for precise discrimination of the three clinical states. Comparative analysis showed that the proposed bidirectional long short-term memory (Bi-LSTM) model outperformed the long short-term memory (LSTM) network, attaining an overall accuracy of 98.22% for the three-class classification task, whereas the LSTM achieved only 94.22% accuracy on the test dataset.
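
As a rough illustration of the fusion step, the 1000-dimensional feature vectors from the two pretrained CNNs can be stacked as a two-step "sequence" and fed to a bidirectional LSTM head; the sequence arrangement and hidden size below are assumptions rather than the paper's configuration.

```python
# Sketch: fuse DenseNet121 / ResNet50 deep features and classify with a Bi-LSTM head.
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
densenet = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def extract_fused_features(x: torch.Tensor) -> torch.Tensor:
    """Stack the two 1000-dim feature vectors as a length-2 sequence: (B, 2, 1000)."""
    return torch.stack([resnet(x), densenet(x)], dim=1)

class BiLSTMHead(nn.Module):
    def __init__(self, feat_dim: int = 1000, hidden: int = 128, n_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])               # logits for COVID-19 / TB / normal
```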

8.
2023 9th International Conference on Advanced Computing and Communication Systems, ICACCS 2023 ; : 1671-1675, 2023.
Article in English | Scopus | ID: covidwho-20241041

ABSTRACT

Pneumonia, a serious respiratory disease, can be devastating if it is not identified and treated in a timely manner. For successful treatment and better patient outcomes, pneumonia must be identified early and properly classified. Deep learning has recently demonstrated considerable promise in medical imaging and has been successfully applied to several image-based diagnosis tasks, including the identification and classification of pneumonia. Pneumonia is a respiratory illness that can produce pleural effusion, a condition in which fluid accumulates around the lungs. COVID-19 has become a major cause of the global rise in pneumonia cases, and early detection of the disease enables curative therapy and increases the likelihood of survival. Chest X-ray (CXR) imaging is a common method of detecting and diagnosing pneumonia, but examining chest X-rays is a difficult task that often leads to variability and inaccuracies. In this study, we created an automatic pneumonia diagnosis method, a computer-aided diagnosis (CAD) system, which may significantly reduce the time and cost of interpreting CXR imaging data. This work uses deep learning, which has the potential to transform medical imaging and has shown promising results in the detection and classification of pneumonia. Further research and development are needed to improve the accuracy and reliability of these models and make them more accessible to healthcare providers. Such models can provide fast and accurate results, with high sensitivity and specificity in identifying pneumonia in chest X-rays. © 2023 IEEE.

9.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20240716

ABSTRACT

This paper proposes an automated classification method for COVID-19 chest CT volumes using an improved 3D MLP-Mixer. Novel coronavirus disease 2019 (COVID-19) has spread across the world, causing a large number of infections and deaths. The sudden increase in the number of COVID-19 patients causes manpower shortages in medical institutions. A computer-aided diagnosis (CAD) system provides quick and quantitative diagnosis results; a CAD system for COVID-19 enables an efficient diagnosis workflow and helps to reduce such manpower shortages. In image-based diagnosis of viral pneumonia, including COVID-19, both local and global image features are important because viral pneumonia causes many ground-glass opacities and consolidations over large areas of the lung. This paper proposes an automated classification method of chest CT volumes for COVID-19 diagnosis assistance. MLP-Mixer is a recent image classification method with a Vision Transformer-like architecture that performs classification using both local and global image features. To classify 3D CT volumes, we developed a hybrid classification model that consists of a 3D convolutional neural network (CNN) and a 3D version of the MLP-Mixer. The classification accuracy of the proposed method was evaluated using a dataset of 1205 CT volumes and reached 79.5%, higher than that of conventional 3D CNN models consisting of 3D CNN layers and simple MLP layers. © 2023 SPIE.
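
The 3D hybrid model itself is not specified in detail, but the core MLP-Mixer building block (a token-mixing MLP followed by a channel-mixing MLP, each with a residual connection) can be sketched as follows; the token count, embedding size, and hidden width are placeholders.

```python
# Minimal MLP-Mixer block: token mixing across positions, then channel mixing per token.
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, n_tokens: int, dim: int, hidden: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(nn.Linear(n_tokens, hidden), nn.GELU(),
                                       nn.Linear(hidden, n_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                         nn.Linear(hidden, dim))

    def forward(self, x):                        # x: (batch, n_tokens, dim)
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))
```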

10.
International Conference on Enterprise Information Systems, ICEIS - Proceedings ; 1:675-682, 2023.
Article in English | Scopus | ID: covidwho-20239737

ABSTRACT

In this work, a study based on deep-learned features obtained via transfer learning was developed to identify a set of features and techniques for pattern recognition in COVID-19 images. The proposal was based on the ResNet-50, DenseNet-201, and EfficientNet-b0 deep-learning models. The layer chosen for analysis was the average pooling layer of each model, yielding 2048 features from ResNet-50, 1920 features from DenseNet-201, and 1280 features from EfficientNet-b0. The most relevant descriptors were selected for the classification process by applying the ReliefF algorithm, and two classification strategies were evaluated: individually applied classifiers and an ensemble of classifiers using score-level fusion. The two best combinations both used the DenseNet-201 model with the same subset of features: the first used the SMO classifier (accuracy of 98.38%) and the second the ensemble strategy (accuracy of 97.89%). The feature subset was composed of only 210 descriptors, representing only 10% of the original set. The strategies and information presented here are relevant contributions for specialists interested in the study and development of computer-aided diagnosis for COVID-19 images. Copyright © 2023 by SCITEPRESS - Science and Technology Publications, Lda. Under CC license (CC BY-NC-ND 4.0)
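
A minimal sketch of the pipeline shape: deep features taken at the global-average-pool layer of DenseNet-201 and a ranking step that keeps the top 210 descriptors. ReliefF is not in scikit-learn, so a mutual-information ranking stands in for it here, and the SMO classifier (a Weka SVM trainer) is approximated by an SVC; both substitutions are assumptions, not the paper's setup.

```python
# Sketch: 1920-dim avg-pool features from DenseNet-201, then top-k descriptor selection.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

backbone = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
feature_extractor = nn.Sequential(backbone.features, nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())  # 1920-dim output

def select_top_k(X: np.ndarray, y: np.ndarray, k: int = 210) -> np.ndarray:
    scores = mutual_info_classif(X, y, random_state=0)   # stand-in for ReliefF ranking
    return np.argsort(scores)[::-1][:k]                  # indices of the k best descriptors

clf = SVC(kernel="poly")    # rough stand-in for the SMO-trained SVM
```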

11.
Proceedings of SPIE - The International Society for Optical Engineering ; 12602, 2023.
Article in English | Scopus | ID: covidwho-20238790

ABSTRACT

With the COVID-19 outbreak in 2019, the world is facing a major crisis and people's health is at serious risk. Accurate segmentation of lesions in CT images can help doctors understand the infection, prescribe the right medicine, and monitor patients' conditions. Fast and accurate diagnosis not only allows limited medical resources to be allocated sensibly but also helps control the spread of the disease, and computer-aided diagnosis can serve this purpose. This paper therefore proposes LLDSNet, a deep learning segmentation network trained on a small amount of data and composed of two modules: a contextual feature-aware module (CFAM) and a shape edge detection module (SEDM). Lesion segmentation in COVID-19 poses a major challenge because lesion morphology varies between CT scans, lesions are dispersed with small lesion areas, the lesion and background areas are imbalanced, and the boundaries between lesion and normal areas are blurred. The CFAM effectively extracts overall and local features, while the SEDM accurately finds the edges of the lesion area to segment the lesions within it. A hybrid loss function is used to mitigate the class imbalance problem and improve overall network performance. Experiments show that LLDSNet achieves a Dice coefficient of 0.696 on a small dataset, the best performance compared with five currently popular segmentation networks. © 2023 SPIE.
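
The hybrid loss is not spelled out in the abstract; a common choice for imbalanced lesion segmentation is a weighted sum of Dice loss and binary cross-entropy, sketched here under that assumption.

```python
# Sketch of a hybrid segmentation loss (Dice + BCE); the weighting is an assumption.
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, smooth: float = 1.0, w_dice: float = 0.5):
    prob = torch.sigmoid(logits)                              # (B, 1, H, W) lesion probability
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1 - (2 * inter + smooth) / (denom + smooth)        # per-sample Dice loss
    bce = F.binary_cross_entropy_with_logits(logits, target,
                                             reduction="none").mean(dim=(1, 2, 3))
    return (w_dice * dice + (1 - w_dice) * bce).mean()
```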

12.
Proceedings of SPIE - The International Society for Optical Engineering ; 12566, 2023.
Article in English | Scopus | ID: covidwho-20238616

ABSTRACT

Computer-aided diagnosis of COVID-19 from lung medical images has received increasing attention in clinical practice and research. However, developing such automatic models is usually challenging because it requires large amounts of data and substantial computing power. With only 317 training images, this paper presents a Classic Augmentation based Classifier Generative Adversarial Network (CACGAN) for data synthesis. To balance feature extraction ability against model lightness for lung CT images, the CACGAN network is constructed mainly from convolution blocks. During training, each iteration updates the discriminator's network parameters twice and the generator's network parameters once. To evaluate CACGAN, this paper organizes multiple comparisons between each pair of CACGAN synthetic data, classically augmented data, and original data. Seven classifiers, ranging from simple to complex, are built and trained on the three sets of data. To control the variables, the three sets of data use exactly the same classifier structure and exactly the same validation dataset. The results show that CACGAN successfully learns to synthesize new lung CT images with specific labels. © 2023 SPIE.
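
The stated update schedule (two discriminator updates per generator update in each iteration) can be sketched as a conditional-GAN training loop; `G`, `D`, the adversarial loss, and the latent size are placeholders rather than the CACGAN definitions.

```python
# Sketch of a conditional-GAN epoch with two D steps per G step (placeholder models).
import torch

def train_epoch(G, D, real_loader, opt_g, opt_d, adv_loss, latent_dim=100, device="cpu"):
    for real, labels in real_loader:
        real, labels = real.to(device), labels.to(device)
        for _ in range(2):                                   # discriminator updated twice
            z = torch.randn(real.size(0), latent_dim, device=device)
            fake = G(z, labels).detach()
            d_real, d_fake = D(real, labels), D(fake, labels)
            d_loss = adv_loss(d_real, torch.ones_like(d_real)) + \
                     adv_loss(d_fake, torch.zeros_like(d_fake))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        z = torch.randn(real.size(0), latent_dim, device=device)
        d_out = D(G(z, labels), labels)                      # generator updated once
        g_loss = adv_loss(d_out, torch.ones_like(d_out))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```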

13.
Research on Biomedical Engineering ; 2023.
Article in English | Scopus | ID: covidwho-20236113

ABSTRACT

Purpose: In December 2019, the COVID-19 pandemic began. To reduce mortality, in addition to mass vaccination, it is necessary to scale up and accelerate clinical diagnosis and to create new ways of monitoring patients that can help in the construction of specific treatments for the disease. Objective: In this work, we propose rapid protocols for clinical diagnosis of COVID-19 through the automatic analysis of hematological parameters using evolutionary computing and machine learning. These hematological parameters are obtained from blood tests common in clinical practice. Method: We investigated the best classifier architectures. Then, we applied the particle swarm optimization (PSO) algorithm to select the most relevant attributes: serum glucose, troponin, partial thromboplastin time, ferritin, D-dimer, lactic dehydrogenase, and indirect bilirubin. We then reassessed the best classifier architectures using the reduced set of features. Finally, we used decision trees to build four rapid protocols for COVID-19 clinical diagnosis by assessing the impact of each selected feature. The proposed system was used to support clinical diagnosis and assessment of disease severity in patients admitted to intensive and semi-intensive care units, as a case study in the city of Paudalho, Brazil. Results: We developed a web system for COVID-19 diagnosis support. Using a 100-tree random forest, we obtained accuracy, sensitivity, and specificity above 99%. After feature selection, the results were similar. The four empirical clinical protocols returned accuracies, sensitivities, and specificities above 98%. Conclusion: Using a reduced set of hematological parameters common in clinical practice, it was possible to achieve accuracy, sensitivity, and specificity comparable to those obtained with RT-PCR. It was also possible to automatically generate clinical decision protocols, allowing relatively accurate clinical diagnosis even without the aid of the web decision support system. © 2023, The Author(s), under exclusive licence to The Brazilian Society of Biomedical Engineering.
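
A minimal sketch of the downstream classifier on the seven selected hematological attributes, assuming a pandas table with those columns and a binary outcome column named `covid_positive` (both column names are hypothetical); the PSO selection step itself is not reproduced.

```python
# Sketch: 100-tree random forest on the seven PSO-selected hematological features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURES = ["serum_glucose", "troponin", "partial_thromboplastin_time",
            "ferritin", "d_dimer", "lactic_dehydrogenase", "indirect_bilirubin"]

def fit_rf(df: pd.DataFrame) -> RandomForestClassifier:
    X, y = df[FEATURES], df["covid_positive"]        # assumed column names
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5, scoring="accuracy"))  # quick sanity check
    return clf.fit(X, y)
```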

14.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12467, 2023.
Article in English | Scopus | ID: covidwho-20235035

ABSTRACT

MIDRC was created to facilitate machine learning research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the COVID-19 pandemic and beyond. The purpose of Technology Development Project (TDP) 3c is to create resources that assist researchers in evaluating the performance of their machine learning algorithms. An interactive decision tree has been developed, organized by the type of task that the machine learning algorithm is being trained to perform. The user can provide information such as: (a) the type of task, (b) the nature of the reference standard, and (c) the type of algorithm output. Based on these responses, users receive recommendations regarding appropriate performance evaluation approaches and metrics, including literature references, short video tutorials, and links to available software. Five tasks have been identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event analysis, and (e) estimation. As an example, the classification branch of the decision tree covers binary and multi-class classification tasks and provides suggestions for methods and metrics, software recommendations, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output and for reference standards with negligible or non-negligible variability and unreliability. The decision tree is publicly available on the MIDRC website to assist researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, estimation, and time-to-event tasks. © COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.
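
For the binary-classification branch, the kind of metrics such a decision tree points to can be computed directly; a small scikit-learn sketch with placeholder labels and scores:

```python
# Sketch: sensitivity, specificity, and ROC-AUC for a binary classifier's outputs.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # placeholder reference standard
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])   # placeholder continuous output
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")
```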

15.
Digit Health ; 9: 20552076231180054, 2023.
Article in English | MEDLINE | ID: covidwho-20232672

ABSTRACT

Objective: The monkeypox virus is slowly spreading and there are fears it could spread as COVID-19 did. Computer-aided diagnosis (CAD) based on deep learning approaches, especially convolutional neural networks (CNNs), can assist in the rapid assessment of reported incidents. Current CADs are mostly based on an individual CNN; the few that employ multiple CNNs have not investigated which combination of CNNs has the greatest impact on performance, and they rely only on spatial information from deep features to train their models. This study aims to construct a CAD tool named "Monkey-CAD" that addresses these limitations and automatically diagnoses monkeypox rapidly and accurately. Methods: Monkey-CAD extracts features from eight CNNs and then examines the best possible combination of deep features that influence classification. It employs the discrete wavelet transform (DWT) to merge features, which reduces the size of the fused features and provides a time-frequency representation. The size of these deep features is then further reduced via an entropy-based feature selection approach. The reduced fused features are finally used to provide a better representation of the input and feed three ensemble classifiers. Results: Two freely accessible datasets, Monkeypox Skin Image (MSID) and Monkeypox Skin Lesion (MSLD), are employed in this study. Monkey-CAD could discriminate between cases with and without monkeypox, achieving an accuracy of 97.1% on MSID and 98.7% on MSLD. Conclusions: These promising results demonstrate that Monkey-CAD can be employed to assist health practitioners and verify that fusing deep features from selected CNNs can boost performance.
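
The DWT fusion step can be illustrated with PyWavelets: each CNN's feature vector is decomposed by a one-level 1-D DWT and the approximation coefficients are combined, roughly halving the fused size. The averaging rule and wavelet below are assumptions, and the paper merges features from eight CNNs rather than the two shown here.

```python
# Sketch: DWT-based fusion of two deep-feature vectors (assumed wavelet and fusion rule).
import numpy as np
import pywt

def dwt_fuse(feat_a: np.ndarray, feat_b: np.ndarray, wavelet: str = "db1") -> np.ndarray:
    ca_a, _ = pywt.dwt(feat_a, wavelet)      # keep approximation, drop detail coefficients
    ca_b, _ = pywt.dwt(feat_b, wavelet)
    return (ca_a + ca_b) / 2.0               # fused, reduced-size representation

fused = dwt_fuse(np.random.rand(1024), np.random.rand(1024))   # -> 512 fused values
```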

16.
Knowl Inf Syst ; : 1-41, 2023 May 24.
Article in English | MEDLINE | ID: covidwho-20230732

ABSTRACT

The diagnostic phase of the treatment process is essential for patient guidance and follow-up. The accuracy and effectiveness of this phase can determine the life or death of a patient. For the same symptoms, different doctors may come up with different diagnoses whose treatments may, instead of curing a patient, be fatal. Machine learning (ML) brings new solutions to healthcare professionals, saving time and supporting the appropriate diagnosis. ML is a data analysis method that automates the creation of analytical models and enables predictive analysis. Several ML models and algorithms rely on features extracted from, for example, a patient's medical images to indicate whether a tumor is benign or malignant. The models differ in the way they operate and in the methods used to extract the discriminative features of the tumor. In this article, we review different ML models for tumor classification and COVID-19 infection detection and evaluate the respective works. The computer-aided diagnosis (CAD) systems that we refer to as classical are based on accurate feature identification, usually performed manually or with other ML techniques that are not involved in classification. Deep learning-based CAD systems automatically perform the identification and extraction of discriminative features. The results show that the two types of CAD have quite similar performance, but the choice of one or the other depends on the dataset: manual feature extraction is necessary when the dataset is small; otherwise, deep learning is used.

17.
15th International Conference on Developments in eSystems Engineering, DeSE 2023 ; 2023-January:363-368, 2023.
Article in English | Scopus | ID: covidwho-2327175

ABSTRACT

To restrict the virus's transmission during the pandemic and lessen the strain on the healthcare system, computer-assisted diagnosis for the accurate and rapid detection of coronavirus disease (COVID-19) has become a prerequisite. Compared with other types of imaging and detection, chest X-ray (CXR) imaging offers several advantages, and healthcare practitioners may benefit from any tool providing quick and accurate COVID-19 infection detection. COVID-LiteNet, the technique proposed in this paper, combines white balance with Contrast Limited Adaptive Histogram Equalization (CLAHE) and a convolutional neural network (CNN). White balance is employed as an image pre-processing step, followed by CLAHE, to improve the visibility of CXR images, and the CNN is trained using sparse categorical cross-entropy for image classification, yielding a small parameter file size of 2.24 MB. The proposed COVID-LiteNet technique produced better results than a vanilla CNN with no pre-processing, outperforming several state-of-the-art methods with a binary classification accuracy of 98.44 percent and a multi-class classification accuracy of 97.50 percent, as well as on various other performance parameters. COVID-LiteNet may help radiologists identify COVID-19 patients from CXR pictures by providing thorough model interpretations, cutting diagnostic time significantly. © 2023 IEEE.
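
A sketch of the stated pre-processing chain with OpenCV, treating the CXR as grayscale; the "white balance" step is approximated here by a min-max intensity normalization, and the CLAHE clip limit and tile size are assumed defaults rather than the paper's settings.

```python
# Sketch: intensity normalization (stand-in for white balance) followed by CLAHE on a CXR.
import cv2
import numpy as np

def preprocess_cxr(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    balanced = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)   # crude white-balance proxy
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(balanced.astype(np.uint8))                  # contrast-enhanced CXR
```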

18.
2nd International Conference on Sustainable Computing and Data Communication Systems, ICSCDS 2023 ; : 413-419, 2023.
Article in English | Scopus | ID: covidwho-2326495

ABSTRACT

Deep learning has been widely used to analyze radiographic images such as chest scans. These radiographic images contain a wealth of information, including patterns and cluster-like formations, which aid in the detection and confirmation of COVID-19-like pandemics. The COVID-19 pandemic is wreaking havoc on global well-being and public health; to date, more than 27 million confirmed cases have been recorded globally. Because of the increasing number of confirmed cases and the issues posed by COVID-19 variants, fast and accurate categorization of healthy and infected individuals is critical for COVID-19 management and treatment. In medical image analysis and classification, artificial intelligence (AI) approaches in general, and region-based convolutional neural networks (CNNs) in particular, have yielded promising results. In this study, a deep Mask R-CNN architecture based on chest image classification is proposed for the diagnosis of COVID-19. Building an effective and reliable Mask R-CNN classifier was difficult owing to the lack of sufficiently large, high-quality chest image datasets. These complications are addressed by using Mask Region-based Convolutional Neural Networks (R-CNNs) as a framework for detecting COVID-19 patients from chest images in an open-source dataset. First, the model was evaluated using 100 images from the original processed dataset and was found to be accurate. The model was then validated against an independent dataset of COVID-19 X-ray images. The proposed model outperformed all other models, both in general and specifically when tested on an independent testing set. © 2023 IEEE.
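
A minimal torchvision sketch of adapting a pretrained Mask R-CNN to a two-class setting (background plus a COVID-19 finding class); the paper's actual classes, training data, and hyperparameters are not reproduced here.

```python
# Sketch: load a pretrained Mask R-CNN and replace its box and mask heads for 2 classes.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_mask_rcnn(num_classes: int = 2):
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)
    return model
```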

19.
1st International Conference on Recent Trends in Microelectronics, Automation, Computing and Communications Systems, ICMACC 2022 ; : 167-173, 2022.
Article in English | Scopus | ID: covidwho-2325759

ABSTRACT

Lung segmentation is the process of detecting and delineating the lungs in medical images, and it supports the identification of lung cancer and pneumonia with the help of image processing techniques. Deep learning algorithms can be incorporated into computer-aided diagnosis (CAD) systems for detecting or recognizing conditions such as acute respiratory distress syndrome (ARDS), tuberculosis, pneumonia, lung cancer, COVID-19, and several other respiratory diseases. This paper presents pneumonia detection from lung segmentation using deep learning methods on chest radiography. The chest X-ray is the most widely used technique among existing modalities because of its lower cost, but its main drawback is that it cannot detect all problems in the chest. We therefore implement convolutional neural networks (CNNs) to perform lung segmentation and obtain correct results, and the 'lost' regions of the lungs are reconstructed by an automatic segmentation method from raw chest X-ray images. © 2022 IEEE.

20.
2023 IEEE International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics, ICIITCEE 2023 ; : 1084-1089, 2023.
Article in English | Scopus | ID: covidwho-2319509

ABSTRACT

COVID-19 is an emerging viral disease that infects the lungs and upper respiratory tract. It can be identified using medical imaging and PCR assays; in the proposed classification model, which works well, medical images are used to identify COVID-19. An efficient screening and diagnostic phase for treating infected patients may turn out to be a crucial step in the battle against this fatal illness, and chest X-ray (CXR) scans can be used for this purpose. The use of chest X-ray imaging for early detection may prove to be a crucial strategy in the fight against COVID-19. Many computer-aided diagnostic (CAD) methods have been developed to help radiologists and provide them with additional information. In a network trained on many classes, tertiary classification becomes more accurate as the number of classes increases. © 2023 IEEE.
